Goal of this Rmarkdown document
This Rmarkdown notebook documents in detail how the analysis will be performed, accompanied by R code where informative. It contains not only the statistical model code, but also the results the models produce when applied to the pilot data. At the end we report a power analysis, which provides some insight into how many participants we need to test.
Main Research Questions
Each of the analyses is tailored to provide statistically informed conclusions about the following research questions:
- Does the production of a gesture by Dutch learners of Spanish improve their L2 acoustic stress placement (i.e., are they more likely to stress the L2 target syllable)? (confirmatory)
- Does the production of a gesture by Dutch learners of Spanish affect their acoustic markers of L2 stress production (i.e., do they mark prominence more saliently)? (confirmatory)
- When L2 stress placement is especially taxing (i.e., when stress is orthographically marked and/or there is an L1/L2 stress mismatch), is gesture-speech synchrony affected? (confirmatory)
  - 3a. If lexical stress in L2 is incorrectly placed, is the timing of the gesture still temporally attracted to the L2 target? (exploratory)
  - 3b. If lexical stress in L2 is correctly placed, is the timing of the gesture still temporally attracted to the L1 target? (exploratory)
Initial descriptive checks of the data
Here we provide a descriptive overview of the syllable identifications relative to the target (Table 1). In the current pilot data the number of syllables identified by EasyAlign perfectly matched the targeted number of syllables; i.e., in 100% of the trials there were 0 differences between the number of syllables detected and the number targeted.
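A check like this comes down to a simple cross-tabulation; a minimal sketch, where `syll_diff` is a hypothetical stand-in for the per-trial difference between detected and targeted syllable counts:

```r
# Sketch: percentage of trials per difference in detected vs. targeted
# syllable counts (syll_diff is a hypothetical stand-in for the pilot column)
syll_diff <- c(0, 0, 0, 0, 0, 0, 0, 0)  # detected minus targeted, per trial
pct <- round(100 * prop.table(table(syll_diff)), 2)
pct  # here: 100% of trials at a difference of 0
```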
(#tab:table01)
Table 1. Summary of the percentages of differences between detected and targeted numbers of syllables
Table 2 provides the percentages of the different types of L2 stress placement matches and mismatches.
(#tab:table02)
Table 2. Summary of the percentages of stress match/mismatch types
| Stress placement | L1/L2 stress | % of trials |
|---|---|---|
| L2 correct | same | 33.93 |
| L2 incorrect & L1 match | same | 0.00 |
| L2 incorrect & L1 mismatch | same | 16.07 |
| L2 correct | difference | 44.05 |
| L2 incorrect & L1 match | difference | 0.00 |
| L2 incorrect & L1 mismatch | difference | 5.95 |
Main Confirmatory Analysis
For all analyses we will use linear mixed-effects regressions with maximum likelihood estimation, using the R package nlme. Our models will always have participant and trial ID as random variables. We will always try to fit random slopes in addition to random intercepts. With the current pilot data, however, adding random slopes resulted in non-converging models; thus, for all models reported here, participant and trial ID enter as random intercepts only. We further report a Cohen's d for our model predictors using the R package EMAtools. For interaction effects we will follow up with post-hoc contrast analyses using the R package lsmeans, applying a Bonferroni correction for these multiple comparisons.
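The fall-back from random slopes to random intercepts can be sketched as follows. The data here are simulated (variable names mirror the pilot data), so this illustrates the fitting strategy rather than reproducing the actual analysis code:

```r
library(nlme)

# Simulated stand-in for the pilot data (ppn, target, condition, accuracy)
set.seed(1)
D_sim <- data.frame(
  ppn       = factor(rep(1:4, each = 40)),
  target    = factor(rep(1:20, 8)),
  condition = factor(rep(c("nogesture", "gesture"), 80)),
  accuracy  = abs(rnorm(160, mean = 70, sd = 60))
)

# Try a random slope for condition over participants; if the model does not
# converge, fall back to random intercepts only (as was done for the pilot data)
fit <- tryCatch(
  lme(accuracy ~ condition, data = D_sim,
      random = list(ppn = ~condition, target = ~1), method = "ML"),
  error = function(e)
    lme(accuracy ~ condition, data = D_sim,
        random = list(~1 | ppn, ~1 | target), method = "ML")
)
```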
Research question 1: Effect of gesture on stress timing
For the first analysis we simply assess whether the absolute difference in directional stress timing differs between the gesture and no-gesture conditions, while also accounting for effects on timing due to the L1/L2 stress difference and accentedness. If gesture improves stress timing, lower absolute stress timing values are to be expected (i.e., smaller deviations from perfect synchrony).
Figure 3 upper panel. Effect of gesture versus no gesture on stress timing
We first construct a base model predicting the overall mean, with participant and trial ID as random variables and absolute stress timing as the dependent variable. This model is then compared to a model with stress difference + accentedness + gesture condition as main effects.
Code chunk 1. Model research question 1
```r
D$accuracy <- abs(D$stressed_mistimingL2L1) #absolute deviation in stress timing
#base model predicting the overall mean stress timing
model0 <- lme(accuracy~1, data = D, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
#alternative model with stress, accentedness, and gesture versus no gesture as predictors
model1 <- lme(accuracy~stress+accent+condition, data = D, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anovcomp01 <- anova(model0, model1) #test base model versus model 1
sum1 <- summary(model1)
Dmod1 <- lme.dscore(model1, D, type="nlme") #Cohen's d for the model predictors
```
Click here for model 1 R output
```r
model1
```
```
## Linear mixed-effects model fit by maximum likelihood
## Data: D
## Log-likelihood: -2026.702
## Fixed: accuracy ~ stress + accent + condition
## (Intercept) stressdifference accentaccent present
## 67.964286 -19.797619 8.261905
## conditiongesture
## -21.285714
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 0.01303649
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 57.98303 85.65074
##
## Number of Observations: 336
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
```r
sum1
```
```
## Linear mixed-effects model fit by maximum likelihood
## Data: D
## AIC BIC logLik
## 4067.404 4094.124 -2026.702
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 0.01303649
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 57.98303 85.65074
##
## Fixed effects: accuracy ~ stress + accent + condition
## Value Std.Error DF t-value p-value
## (Intercept) 67.96429 12.21253 167 5.565129 0.0000
## stressdifference -19.79762 13.01534 164 -1.521099 0.1302
## accentaccent present 8.26190 13.01534 164 0.634782 0.5265
## conditiongesture -21.28571 9.40139 167 -2.264103 0.0249
## Correlation:
## (Intr) strssd accntp
## stressdifference -0.533
## accentaccent present -0.533 0.000
## conditiongesture -0.385 0.000 0.000
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -1.6119396 -0.4080404 -0.2752586 -0.1043269 4.2310610
##
## Number of Observations: 336
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
```r
Dmod1
```
```
## t df d
## stressdifference -1.5210995 164 -0.23755583
## accentaccent present 0.6347823 164 0.09913635
## conditiongesture -2.2641030 167 -0.35040310
```
Click here for model 1 summary
In our pilot data, the model with stress, accentedness, and gesture condition as predictors outperformed the base model, change in Chi-sq (3) = 7.837, p = 0.050. The model's results indicate a statistically reliable main effect of gesture vs. no gesture, b = -21.286, t(167) = -2.264, p = 0.025, Cohen's d = -0.35. Stress difference was not a reliable main effect, b = -19.798, t(164) = -1.521, p = 0.130, Cohen's d = -0.238. Accent was not a reliable main effect, b = 8.262, t(164) = 0.635, p = 0.526, Cohen's d = 0.099.
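The Cohen's d values returned by lme.dscore() follow the conversion d = 2t / sqrt(df); as a check, the values above can be reproduced by hand from the printed t-values and degrees of freedom:

```r
# Reproducing the reported Cohen's d values: d = 2 * t / sqrt(df)
t_vals  <- c(stress = -1.5210995, accent = 0.6347823, gesture = -2.2641030)
df_vals <- c(stress = 164,        accent = 164,       gesture = 167)
d_vals  <- round(2 * t_vals / sqrt(df_vals), 3)
d_vals  # -0.238, 0.099, -0.350, matching the Dmod1 output
```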
Figure 3 lower panels. Effect of gesture versus no gesture as a function of stress and accent
We will further assess this in a more complex model, expanding our analysis with the relevant stimulus conditions as well as their interactions with the gesture condition. If the interactions are statistically reliable we will perform post-hoc comparisons with the R package lsmeans, using a Bonferroni correction.
Code chunk 2. Model research question 1, with three way interaction
```r
#alternative model with the three-way interaction of condition, stress, and accent
model2 <- lme(accuracy~condition*stress*accent,
              data = D, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anova(model1, model2) #test model 1 versus model 2
```

```
## Model df AIC BIC logLik Test L.Ratio p-value
## model1 1 7 4067.404 4094.124 -2026.702
## model2 2 11 4073.784 4115.772 -2025.892 1 vs 2 1.620135 0.8052
```

```r
#summary of model 2 and post-hoc contrasts
sum3 <- summary(model2)
posthoc3 <- lsmeans(model2, list(pairwise ~ condition|accent|stress), adjust="bonferroni")
```
Click here for model 2 summary
```r
sum3
```
```
## Linear mixed-effects model fit by maximum likelihood
## Data: D
## AIC BIC logLik
## 4073.784 4115.772 -2025.892
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 0.009792346
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 58.03803 85.35406
##
## Fixed effects: accuracy ~ condition * stress * accent
## Value Std.Error DF
## (Intercept) 67.40476 16.11977 164
## conditiongesture -11.45238 18.85156 164
## stressdifference -28.52381 22.79680 163
## accentaccent present 7.88095 22.79680 163
## conditiongesture:stressdifference 0.02381 26.66013 164
## conditiongesture:accentaccent present -16.66667 26.66013 164
## stressdifference:accentaccent present 20.45238 32.23954 163
## conditiongesture:stressdifference:accentaccent present -6.04762 37.70312 164
## t-value p-value
## (Intercept) 4.181496 0.0000
## conditiongesture -0.607503 0.5444
## stressdifference -1.251220 0.2126
## accentaccent present 0.345704 0.7300
## conditiongesture:stressdifference 0.000893 0.9993
## conditiongesture:accentaccent present -0.625153 0.5327
## stressdifference:accentaccent present 0.634388 0.5267
## conditiongesture:stressdifference:accentaccent present -0.160401 0.8728
## Correlation:
## (Intr) cndtng strssd
## conditiongesture -0.585
## stressdifference -0.707 0.413
## accentaccent present -0.707 0.413 0.500
## conditiongesture:stressdifference 0.413 -0.707 -0.585
## conditiongesture:accentaccent present 0.413 -0.707 -0.292
## stressdifference:accentaccent present 0.500 -0.292 -0.707
## conditiongesture:stressdifference:accentaccent present -0.292 0.500 0.413
## accntp cndtn: cndt:p
## conditiongesture
## stressdifference
## accentaccent present
## conditiongesture:stressdifference -0.292
## conditiongesture:accentaccent present -0.585 0.500
## stressdifference:accentaccent present -0.707 0.413 0.413
## conditiongesture:stressdifference:accentaccent present 0.413 -0.707 -0.707
## strs:p
## conditiongesture
## stressdifference
## accentaccent present
## conditiongesture:stressdifference
## conditiongesture:accentaccent present
## stressdifference:accentaccent present
## conditiongesture:stressdifference:accentaccent present -0.585
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -1.5871742 -0.5052313 -0.2389013 -0.1052169 4.3243362
##
## Number of Observations: 336
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
Click here for posthoc3 output
```r
posthoc3
```
```
## $`lsmeans of condition | accent, stress`
## accent = no accent, stress = same:
## condition lsmean SE df lower.CL upper.CL
## nogesture 67.4 16.1 1 -137 272
## gesture 56.0 16.1 1 -149 261
##
## accent = accent present, stress = same:
## condition lsmean SE df lower.CL upper.CL
## nogesture 75.3 16.1 1 -130 280
## gesture 47.2 16.1 1 -158 252
##
## accent = no accent, stress = difference:
## condition lsmean SE df lower.CL upper.CL
## nogesture 38.9 16.1 1 -166 244
## gesture 27.5 16.1 1 -177 232
##
## accent = accent present, stress = difference:
## condition lsmean SE df lower.CL upper.CL
## nogesture 67.2 16.1 1 -138 272
## gesture 33.1 16.1 1 -172 238
##
## Degrees-of-freedom method: containment
## Confidence level used: 0.95
##
## $`pairwise differences of condition | accent, stress`
## accent = no accent, stress = same:
## 3 estimate SE df t.ratio p.value
## nogesture - gesture 11.5 18.9 164 0.608 0.5444
##
## accent = accent present, stress = same:
## 3 estimate SE df t.ratio p.value
## nogesture - gesture 28.1 18.9 164 1.492 0.1377
##
## accent = no accent, stress = difference:
## 3 estimate SE df t.ratio p.value
## nogesture - gesture 11.4 18.9 164 0.606 0.5452
##
## accent = accent present, stress = difference:
## 3 estimate SE df t.ratio p.value
## nogesture - gesture 34.1 18.9 164 1.811 0.0719
##
## Degrees-of-freedom method: containment
```
Prosodic modulation of gesture
Does gesture vs. no gesture affect acoustic markers of stress? We perform a linear mixed regression with the normalized acoustic markers as the dependent variable, and acoustic marker (peak F0, peak envelope, and duration) x condition as the independent variables.
Figure 4. Effect of gesture vs. no gesture on acoustic markers of stress
Code chunk 3. Gesture and acoustic output
```r
Dlong <- gather(D, "marker", "acoust_out", 13:15) #reshape acoustic markers to long format
#base model predicting the overall mean acoustic output
model0 <- lme(acoust_out~1, data = Dlong, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
#alternative model with acoustic marker x gesture condition as predictors
model1 <- lme(acoust_out~marker*condition, data = Dlong, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anova(model0, model1) #test base model versus model 1
```

```
## Model df AIC BIC logLik Test L.Ratio p-value
## model0 1 4 1951.533 1971.196 -971.7666
## model1 2 9 1534.941 1579.182 -758.4703 1 vs 2 426.5926 <.0001
```

```r
#summary of model 1 and post-hoc contrasts
anovamod0mod1 <- anova(model0, model1)
sum1 <- summary(model1)
posthocsum1 <- lsmeans(model1, list(pairwise ~ condition|marker), adjust="bonferroni")
Dmod1 <- lme.dscore(model1, Dlong, type="nlme")
```
Click here for model 1 R output
```r
sum1
```
```
## Linear mixed-effects model fit by maximum likelihood
## Data: Dlong
## AIC BIC logLik
## 1534.941 1579.182 -758.4703
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 1.375216e-05
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 0.08539542 0.506822
##
## Fixed effects: acoust_out ~ marker * condition
## Value Std.Error DF t-value p-value
## (Intercept) 1.5512255 0.03977187 835 39.00308 0.0000
## markerpeakF0z -0.5074599 0.05546413 835 -9.14933 0.0000
## markersDURz -0.7038124 0.05546413 835 -12.68951 0.0000
## conditiongesture 0.1850414 0.05546413 835 3.33624 0.0009
## markerpeakF0z:conditiongesture -0.1713849 0.07843812 835 -2.18497 0.0292
## markersDURz:conditiongesture -0.3458323 0.07843812 835 -4.40898 0.0000
## Correlation:
## (Intr) mrkrF0 mrkDUR cndtng mrkF0:
## markerpeakF0z -0.697
## markersDURz -0.697 0.500
## conditiongesture -0.697 0.500 0.500
## markerpeakF0z:conditiongesture 0.493 -0.707 -0.354 -0.707
## markersDURz:conditiongesture 0.493 -0.354 -0.707 -0.707 0.500
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -4.32238038 -0.52000609 0.00753372 0.54513919 4.64779877
##
## Number of Observations: 1008
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
Click here for model 1 summary for research question 2
As described above, we test the model with acoustic marker (peak F0, peak envelope, and duration) x condition against a base model predicting the overall mean of the acoustic output. The model with acoustic marker x condition was more reliable than the base model, Chi-sq (5) = 426.593, p < .001. Table 3 provides an overview of the model predictors.
(#tab:table03)
Table 3. Model fitted predictions
| Predictor | Value | Std. Error | DF | t-value | p-value | Cohen's d |
|---|---|---|---|---|---|---|
| (Intercept) | 1.55 | 0.04 | 835.00 | 39.00 | 0.00 | NA |
| markerpeakF0z | -0.51 | 0.06 | 835.00 | -9.15 | 0.00 | -0.63 |
| markersDURz | -0.70 | 0.06 | 835.00 | -12.69 | 0.00 | -0.88 |
| conditiongesture | 0.19 | 0.06 | 835.00 | 3.34 | 0.00 | 0.23 |
| markerpeakF0z:conditiongesture | -0.17 | 0.08 | 835.00 | -2.18 | 0.03 | -0.15 |
| markersDURz:conditiongesture | -0.35 | 0.08 | 835.00 | -4.41 | 0.00 | -0.31 |
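The likelihood-ratio statistic for this model comparison can be reproduced from the log-likelihoods printed by anova():

```r
# Chi-squared of the comparison: 2 * (logLik(model1) - logLik(model0)),
# with df equal to the difference in number of estimated parameters (9 - 4)
LR <- 2 * (-758.4703 - (-971.7666))
p  <- pchisq(LR, df = 9 - 4, lower.tail = FALSE)
round(LR, 4)  # 426.5926, matching the anova() output
```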
We will further perform a post-hoc analysis to disentangle these interaction effects, assessing for which acoustic markers gesture vs. no gesture affected the acoustic output.
Click here for posthoc model 1 output
```r
posthocsum1
```
```
## $`lsmeans of condition | marker`
## marker = peakAMPz:
## condition lsmean SE df lower.CL upper.CL
## nogesture 1.551 0.0398 1 1.046 2.06
## gesture 1.736 0.0398 1 1.231 2.24
##
## marker = peakF0z:
## condition lsmean SE df lower.CL upper.CL
## nogesture 1.044 0.0398 1 0.538 1.55
## gesture 1.057 0.0398 1 0.552 1.56
##
## marker = sDURz:
## condition lsmean SE df lower.CL upper.CL
## nogesture 0.847 0.0398 1 0.342 1.35
## gesture 0.687 0.0398 1 0.181 1.19
##
## Degrees-of-freedom method: containment
## Confidence level used: 0.95
##
## $`pairwise differences of condition | marker`
## marker = peakAMPz:
## 2 estimate SE df t.ratio p.value
## nogesture - gesture -0.1850 0.0555 835 -3.336 0.0009
##
## marker = peakF0z:
## 2 estimate SE df t.ratio p.value
## nogesture - gesture -0.0137 0.0555 835 -0.246 0.8056
##
## marker = sDURz:
## 2 estimate SE df t.ratio p.value
## nogesture - gesture 0.1608 0.0555 835 2.899 0.0038
##
## Degrees-of-freedom method: containment
```
Research question 3: Gesture-speech asynchrony as a function of trial conditions
From the previous analyses we should know whether stress timing performance and acoustic stress marking increase or decrease as a function of gesture, as well as the possible roles of stress difference and accentedness in stress timing. A further question is whether the timing between gesture and speech is affected by stress difference and accentedness, which would signal that gesture does not simply always synchronize with speech, but that coordination is destabilized by the difficulty of reaching L2 targets without orthographic cues or with an L1 stress competitor.
Using a linear mixed modeling approach similar to the previous analyses, we compare a base model against models with stress difference and accentedness (and their possible interactions) as predictors of the absolute gesture-speech asynchrony.
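The code for these models is not shown above; the following is a sketch consistent with the printed output (the column names abs_asynchrony, stress, and accent are taken from the summaries below, while the data here are simulated stand-ins for subD):

```r
library(nlme)

# Simulated stand-in for subD: 2 participants x 42 targets x 2 observations,
# with stress and accent crossed between targets
set.seed(2)
subD_sim <- data.frame(
  ppn            = factor(rep(1:2, each = 84)),
  target         = factor(rep(rep(1:42, each = 2), 2)),
  stress         = factor(rep(rep(c("same", "difference"), each = 42), 2)),
  accent         = factor(rep(rep(c("no accent", "accent present"), each = 21, times = 2), 2)),
  abs_asynchrony = abs(rnorm(168, mean = 90, sd = 100))
)

# Base model, main-effects model, and interaction model, compared stepwise
model0 <- lme(abs_asynchrony ~ 1, data = subD_sim,
              random = list(~1 | ppn, ~1 | target), method = "ML")
model1 <- lme(abs_asynchrony ~ stress + accent, data = subD_sim,
              random = list(~1 | ppn, ~1 | target), method = "ML")
model2 <- lme(abs_asynchrony ~ stress * accent, data = subD_sim,
              random = list(~1 | ppn, ~1 | target), method = "ML")
anova(model0, model1, model2)
```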
Figure 5. Gesture-speech (a)synchrony depending on stress difference and accentedness
Click here for posthoc model 1 and 2 output
```r
sum1
```
```
## Linear mixed-effects model fit by maximum likelihood
## Data: subD
## AIC BIC logLik
## 2047.489 2066.233 -1017.744
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 0.004390321
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 103.4332 1.721462
##
## Fixed effects: abs_asynchrony ~ stress + accent
## Value Std.Error DF t-value p-value
## (Intercept) 88.90476 13.94886 164 6.373623 0.0000
## stressdifference 14.71429 16.10675 164 0.913548 0.3623
## accentaccent present -10.09524 16.10675 164 -0.626771 0.5317
## Correlation:
## (Intr) strssd
## stressdifference -0.577
## accentaccent present -0.577 0.000
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -0.016668525 -0.011084262 -0.007009053 0.007736769 0.084032038
##
## Number of Observations: 168
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
```r
sum2
```
```
## Linear mixed-effects model fit by maximum likelihood
## Data: subD
## AIC BIC logLik
## 2049.444 2071.312 -1017.722
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 0.0044314
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 103.4192 1.729301
##
## Fixed effects: abs_asynchrony ~ stress * accent
## Value Std.Error DF t-value p-value
## (Intercept) 87.21429 16.15363 163 5.399053 0.0000
## stressdifference 18.09524 22.84468 163 0.792099 0.4295
## accentaccent present -6.71429 22.84468 163 -0.293910 0.7692
## stressdifference:accentaccent present -6.76190 32.30725 163 -0.209300 0.8345
## Correlation:
## (Intr) strssd accntp
## stressdifference -0.707
## accentaccent present -0.707 0.500
## stressdifference:accentaccent present 0.500 -0.707 -0.707
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -0.017022149 -0.010997239 -0.007258370 0.008047324 0.084164000
##
## Number of Observations: 168
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
```r
posthoc2
```
```
## $`lsmeans of stress | accent`
## accent = no accent:
## stress lsmean SE df lower.CL upper.CL
## same 87.2 16.2 1 -118.0 292
## difference 105.3 16.2 1 -99.9 311
##
## accent = accent present:
## stress lsmean SE df lower.CL upper.CL
## same 80.5 16.2 1 -124.8 286
## difference 91.8 16.2 1 -113.4 297
##
## Degrees-of-freedom method: containment
## Confidence level used: 0.95
##
## $`pairwise differences of stress | accent`
## accent = no accent:
## 2 estimate SE df t.ratio p.value
## same - difference -18.1 22.8 163 -0.792 0.4295
##
## accent = accent present:
## 2 estimate SE df t.ratio p.value
## same - difference -11.3 22.8 163 -0.496 0.6205
##
## Degrees-of-freedom method: containment
```
Click here for model 2 summary for research question 3
For our pilot data, including stress difference and accentedness as predictors in an alternative model was not more reliable than the base model predicting the overall mean of the absolute gesture-speech (a)synchrony, Chi-sq (2) = 1.245, p = 0.537, and adding interactions between stress difference and accentedness also did not further improve predictions of gesture-speech asynchrony, Chi-sq (3) = 1.290, p = 0.732. Table 4 provides an overview of the model predictors for the model without interactions.
(#tab:table04)
Table 4. Model fitted predictions
| Predictor | Value | Std. Error | DF | t-value | p-value | Cohen's d |
|---|---|---|---|---|---|---|
| (Intercept) | 88.90 | 13.95 | 164.00 | 6.37 | 0.00 | NA |
| stressdifference | 14.71 | 16.11 | 164.00 | 0.91 | 0.36 | 0.14 |
| accentaccent present | -10.10 | 16.11 | 164.00 | -0.63 | 0.53 | -0.10 |
Gesture-speech asynchrony and the directionality of error
From the previous analysis we will know whether gesture-speech synchrony can be affected by trial conditions that may complicate correct stress placement. If gesture-speech synchrony is indeed affected, we can ask how gesture and speech diverge when they are more asynchronous.
```r
#base model predicting the overall mean directional asynchrony
model0 <- lme(asynchrony_L2L1~1, data = subD, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
#alternative model with stress difference x stress placement correctness as predictors
model1 <- lme(asynchrony_L2L1~stress*correct, data = subD, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anova(model0, model1) #test base model versus model 1
```

```
## Model df AIC BIC logLik Test L.Ratio p-value
## model0 1 4 2133.622 2146.118 -1062.811
## model1 2 7 2126.648 2148.515 -1056.324 1 vs 2 12.97427 0.0047
```

```r
summary(model1)
```

```
## Linear mixed-effects model fit by maximum likelihood
## Data: subD
## AIC BIC logLik
## 2126.648 2148.515 -1056.324
##
## Random effects:
## Formula: ~1 | ppn
## (Intercept)
## StdDev: 0.007961735
##
## Formula: ~1 | target %in% ppn
## (Intercept) Residual
## StdDev: 130.1349 2.115984
##
## Fixed effects: asynchrony_L2L1 ~ stress * correct
## Value Std.Error DF
## (Intercept) 14.14516 16.72969 163
## stressdifference 30.74958 22.54347 163
## correctL2 incorrect & L1 mismatch 49.44575 32.69010 163
## stressdifference:correctL2 incorrect & L1 mismatch -210.84048 58.87324 163
## t-value p-value
## (Intercept) 0.845512 0.3991
## stressdifference 1.364013 0.1744
## correctL2 incorrect & L1 mismatch 1.512560 0.1323
## stressdifference:correctL2 incorrect & L1 mismatch -3.581262 0.0005
## Correlation:
## (Intr) strssd cLi&Lm
## stressdifference -0.742
## correctL2 incorrect & L1 mismatch -0.512 0.380
## stressdifference:correctL2 incorrect & L1 mismatch 0.284 -0.383 -0.555
##
## Standardized Within-Group Residuals:
## Min Q1 Med Q3 Max
## -0.063643564 -0.007597103 -0.002391494 0.008405780 0.050266435
##
## Number of Observations: 168
## Number of Groups:
## ppn target %in% ppn
## 2 168
```
Power analysis
To provide some indication of how much data we need to collect to obtain meaningful results, we will perform a power analysis for the first research question. We will assess a complex three-way interaction model and identify how many participants we need to detect main and interaction effects involving the gesture condition at a power of 80%. We use the R package mixedpower to determine this.
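As a self-contained illustration of the simulation-based logic that mixedpower implements (this is not the package's own code, and the effect size, means, and SD are placeholders loosely inspired by the pilot estimates), power at a given sample size can be estimated by repeatedly simulating data and counting how often the test is significant:

```r
# Illustration of simulation-based power: draw data at an assumed effect size,
# test it, and record the proportion of p-values below .05
# (placeholder numbers; not the actual mixedpower procedure)
set.seed(42)
power_at_n <- function(n, effect = 21, sd = 86, n_sim = 500) {
  mean(replicate(n_sim, {
    nogesture <- rnorm(n, mean = 68,          sd = sd)
    gesture   <- rnorm(n, mean = 68 - effect, sd = sd)
    t.test(nogesture, gesture)$p.value < .05
  }))
}
sapply(c(50, 100, 200), power_at_n)  # estimated power grows with n
```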
```r
#for details on this power analysis see
#https://link.springer.com/article/10.3758/s13428-021-01546-0
#load libraries
library(mixedpower)
library(doParallel)
library(lme4) #for lmer()
#main DV of interest
D$accuracy <- abs(D$stressed_mistimingL2L1) #absolute deviation in stress timing
D$ppn <- as.numeric(as.factor(D$ppn)) #random variable to extend
#lme4 model instead of lme
model1 <- lmer(accuracy~accent+stress+condition +(1|ppn) + (1|target), data = D, na.action = na.exclude)
```